How Apple just changed the developer world with this one AI announcement

This is big. Really, really big. It’s subtle. It’s probably not what you think. And it’s going to take a few minutes to explain.
Before I deconstruct the strategic importance of this move, let’s discuss what “it” is. Briefly, Apple is providing access to its on-device AI large language model (LLM) to developers. I can hear you all saying, “That’s it? That’s this big thing? Developers have had access to AI LLMs since there were AI LLMs. Are you saying it’s big because it’s from Apple? Fan boy! Nyah-nyah.”
No. That’s not it. I’m not an Apple fan boy. And I certainly don’t bleed in six colors.
Also: The best AI for coding in 2025 (including a new winner – and what not to use)
Another group of you is probably thinking, “Wait. What? AI from Apple? The last we looked, on the number line between barf and insanely great, Apple Intelligence was about two-thirds of the way toward barf.”
Yeah, I have to agree. Apple Intelligence has been a big nothingburger. I even wrote an entire article about how uninteresting and yawn-inducing Apple Intelligence has been. I still think that. But the fact that Apple’s branding team oversold a feature set doesn’t detract from the seismic change that Apple has just announced.
Developers, developers, developers
I know. In this context, bringing Steve Ballmer’s famous rant into an Apple story is like telling someone, “Live long and may the Force be with you.” But this is a developer story. Let’s be clear: everything about the modern Apple ecosystem is really a developer story.
In fact, everything about the modern world is, fundamentally, a developer story.
Also: Everything announced at Apple’s WWDC 2025 keynote: Liquid Glass, MacOS Tahoe, and more
It’s hard to deny the fact that code rules the world. Nearly everything we do, and certainly all of our communications, supply chain, and daily-life ecosystem, revolves around software. We became a highly connected, mobile-computing-centric society when the smartphone became a permanent appendage to the human body in 2008 or so.
But it wasn’t the generic smartphone. It wasn’t even the iPhone that changed everything. It was the App Store. Prior to the App Store, you needed some level of geek skills to install software. That meant there was friction between having an idea for software and installing it.
Developers had to find users, manage distribution channels, and eventually sell their goods. When I started my first software company, I faced a number of barriers to entry: printing packaging cost tens of thousands of dollars per title; I had to convince a distributor and a retailer to carry the product; and then there was warehousing, shipping, assembly, and a variety of other physical supply-chain issues. Most developers only got to keep 30-40% of the eventual retail price of the product; distributors and retailers got the rest.
Also: The 7 best AI features announced at Apple’s WWDC that I can’t wait to use
Then came the App Store. First, we could sell software for as little as a buck, which could still be profitable. There were no production costs, no cost to print disks or disk labels, no labor to put labels on the disks or prepare them for shipping, and no shipping costs. Users didn’t have to find some “computer kid” to install the software — they just pushed a button and it installed. Developers who sold through the channel got to keep 70% of the revenue instead of just 30 or 40%.
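To put rough numbers on that shift, here's the per-sale math. The prices below are illustrative stand-ins, not figures from any specific product:

```swift
// The figures here are illustrative, not from any specific product.
let retailPrice = 29.99   // a boxed title on a store shelf
let appStorePrice = 0.99  // a typical early App Store price

let retailTake = retailPrice * 0.35      // developers kept roughly 30-40%
let appStoreTake = appStorePrice * 0.70  // the App Store pays out 70%

// The retail take also had to cover packaging, shipping, and warehousing;
// the App Store take carries none of those physical costs.
print(retailTake)    // roughly $10.50 per boxed copy, before physical costs
print(appStoreTake)  // roughly $0.69 per download, with no physical costs
```

The per-copy dollars are smaller, but with zero physical costs and zero inventory risk, a buck-an-app business could actually be profitable.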
Back when the App Store launched, I created 40 pinpoint iPhone apps. I didn’t make enough to give up my day job, but I did make a few thousand bucks in profit. Before the App Store, it would have been impossible to create 40 little apps — impossible to get shelf space, afford production, price them at a buck, or make a profit.
The App Store removed all that friction, and the number of available apps ballooned into the millions. Anybody, anywhere, with a computer and a little programming skill, could — and still can — create an app, get distribution, sell some, and make a profit. Anyone.
Also: Is ChatGPT Plus still worth $20 when the free version packs so many premium features?
Keep in mind that the power of the iPhone and of Android is the developer long tail. Sure, we all have the Facebook and Instagram apps on our phones. We probably all have a few of the big apps like Uber and Instacart.
But it’s not billion-dollar apps that make the platform; it’s the tons and tons of little specialty apps, some of which broke out and became big apps. It’s the fact that anyone can make an app, can afford to make an app, and can afford to get that app into distribution.
It’s not that the App Store lowered the barrier to entry. It’s that the App Store effectively removed any financial barrier to entry at all.
And then came AI
Well, technically, AI has been with us for fifty years or more. The big change is generative AI. ChatGPT, and its desperate competitor clones, changed things once again.
I don’t need to go into the mega-changes we’ve been seeing due to the emergence of generative AI. We cover that every day here at ZDNET. Just about every other publication on the planet is also covering AI in depth.
Also: Your favorite AI chatbot is lying to you all the time
The thing is, AI is bonkers expensive. As a really interesting ZDNET article on AI energy use reported, Boston Consulting Group estimates that AI data centers will consume about 7.5% of America's energy supply within four years. AI data centers are huge and enormously expensive to build out or rent. Statista cites OpenAI's Sam Altman as saying that GPT-4, the LLM inside ChatGPT, cost more than $100 million to train.
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
While most chatbots based on LLMs have free tiers, those tiers are often fairly limited in what they can do and how often they do it. They’re loss leaders designed to get consumers used to the idea of AI so they eventually become customers.
The real business is in licensing. You can oversimplify the AI business by breaking it into two categories: those who create the LLMs, and those who license the LLMs for use in their apps.
Also: Your iPhone will translate calls and texts in real time, thanks to AI
AI companies (those who make the LLMs) base their business models on the premise that other developers will want the benefits of generative AI for their software products. Few developers want to take on the expense of developing an AI from scratch, so they license API calls from the AI companies, effectively paying based on usage.
This makes adding AI to an app absurdly easy. The bulk of the effort is in authenticating the app’s right to access the AI. Then the app just sends a prompt as an API parameter, and the AI returns either plain text or structured text as a response.
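As a concrete sketch of that pattern: the endpoint URL, header names, and JSON shapes below are hypothetical stand-ins, not any real vendor's API, but the whole integration amounts to little more than an authenticated POST:

```swift
import Foundation

// A sketch of the usual cloud-LLM integration: authenticate, send a
// prompt, get text back. The URL, headers, and JSON shapes here are
// hypothetical stand-ins, not any specific vendor's API.
struct PromptRequest: Codable { let model: String; let prompt: String }
struct PromptResponse: Codable { let text: String }

func complete(_ prompt: String, apiKey: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.example.com/v1/complete")!)
    request.httpMethod = "POST"
    // Authenticating the app's right to access the AI is most of the work.
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(
        PromptRequest(model: "example-model", prompt: prompt)
    )
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(PromptResponse.self, from: data).text
}
```

The catch isn't the code. It's that every one of those calls is metered and billed.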
There are two main gotchas. First, whatever your app sends to the AI leaves the device and lands on someone else's servers, so there's a privacy issue. But more to the point, developers have only four business-model options for incorporating AI via API calls in their products:
- Charge a subscription fee, usually monthly, to users for the AI service. If done right, this upsell can prove quite profitable. But it’s a user barrier to entry.
- Bury the AI costs in the monthly fee already being charged to customers for product use. Many developers avoid this path because some customers are willing to pay extra for AI features, and charging for them separately provides an extra profit center.
- Charge a per-usage fee, convincing users to buy credits that get used up as they use the AI features. This is an even harder sell.
- Eat the API costs and give users the AI features at no additional charge. The big risk here is that developers must carefully pre-calculate costs so that there are no “whales” whose usage tips the cost scale over the value of the sale. This is very risky.
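The "whale" risk in that last option is easy to see with some back-of-the-envelope math. All numbers below are made up for illustration:

```swift
// Hypothetical numbers: $0.002 in API fees per AI request, recovered
// from a one-time $2.99 sale (the "eat the API costs" model).
let costPerRequest = 0.002
let salePrice = 2.99

// Lifetime margin on one sale, given how many AI requests that user makes.
func lifetimeMargin(requests: Int) -> Double {
    salePrice - Double(requests) * costPerRequest
}

print(lifetimeMargin(requests: 50))    // a typical user: about $2.89 of margin
print(lifetimeMargin(requests: 5000))  // a "whale": about -$7.01, a loss
```

One heavy user can wipe out the profit from a dozen typical ones, which is why developers who eat the costs have to cap usage or price very carefully.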
In all of these cases, the AI becomes a transactional expense. The AI features presented to customers have to provide big enough value (or be spun as having big enough value) to convince customers to spend for them. If developers eat the AI API fees themselves, the app has to be profitable enough for the developer to include those fees in their cost of goods.
And, again, all of this has privacy concerns on top of the expense barrier to entry.
Apple’s big move
If you think about it, the App Store removed barriers to entry. It removed friction. It removed the friction consumers felt in having to actually do software installation. And it removed tons of developer friction in bringing a product to market.
Removing friction changed the software world as we know it.
Now, with its iOS 26, iPadOS 26, macOS 26, visionOS 26, watchOS 26, and tvOS 26 announcements, Apple is removing the friction involved in adding AI to apps.
Completely.
Also: Apple’s secret sauce is exactly what AI is missing
It’s not that coding the AI into apps has been hard. No, it’s that the business model has had a fairly high coefficient of friction: if you wanted to add AI, you had to solve business-model issues first.
No longer, at least in the Apple ecosystem.
Apple has announced that its Foundation Models framework, which gives developers direct access to the on-device Apple Intelligence LLM, is available on-device (solving the privacy issue) and at no charge (solving the business-model issue).
It’s the no-charge part of this that has me saying this is a revolutionary change. Up until now, if you wanted to add AI to your app, you really had to justify it. You had to have something big enough that you thought you could get an ROI from that investment.
This is Apple’s sample code for calling its AI and getting a value back. (Image: Apple)
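Apple's sample is only a few lines. A sketch of the same call pattern, based on the LanguageModelSession API names shown at WWDC (treat exact names and signatures as approximate, and check Apple's documentation), looks something like this:

```swift
import FoundationModels

// A sketch of the on-device call pattern from Apple's Foundation Models
// framework: create a session, send a prompt, read the text back.
// API names are as shown at WWDC 2025; details may change.
func suggestTags(for note: String) async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Suggest three short tags for this journal entry: \(note)"
    )
    return response.content
}
```

Note what's missing: no API key, no billing account, no network call. That absence is the whole story.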
But now, developers can add AI to any of their apps like any other feature they include. You wouldn’t expect a developer to have to do a business-model ROI analysis to add a drop-down menu or a pop-up calendar to an app. But with AI, developers have had to give it that extra thought, incur that extra friction.
Now, any time a developer is coding along and thinking, “Ya know, an AI prompt would make this work better,” the developer can add that prompt. Boom. Just part of the coding process.
Also: Cisco rolls out AI agents to automate network tasks at ‘machine speed’ – with IT still in control
For the big developers, this change won’t mean all that much. But for the small and independent developers, this is huge. It means we’ll start to see little bits of AI all through our apps, just helping out wherever a developer thinks it might help.
Want to have some smart tags assigned to that note? Just feed a prompt to the Foundation Model API. Want to know if there are two shoes or a shoe and a handbag in that picture? Just feed the bitmap to the model. Want to generate a quick thumbnail for something? Just feed a prompt to the model. Want to have better dialog from your NPCs in your little casual game? Just ask the AI model.
There’s zero monetary investment required to get AI services into an app. Now, sure, the elephant in the room is that Apple’s AI models are fairly meh. But the company is always improving. Those models will get better, year after year. So developers get quick, free AI code now. In a year or two, they get quick, free, really good AI code.
Also: My new favorite iOS 26 feature is a supercharged version of Google Lens – and it’s easy to use
Let’s also not forget the privacy benefits. All this is done on-device. That means the knowledge base won’t be as extensive as ChatGPT’s, but it also means your musings about whether you ate too many pizzas this week, your crush on someone, or your worries about a possible health scare remain private. They won’t make it into some giant bouillabaisse of knowledge shared by the big AI companies.
For some developers, this can be huge. For example, Automattic (the WordPress company) has an unrelated app called Day One, which is a journaling tool. You definitely don’t want your private journaling thoughts shared with some giant AI in the cloud.
“The Foundation Model framework has helped us rethink what’s possible with journaling,” said Paul Mayne, head of Day One at Automattic. “Now we can bring intelligence and privacy together in ways that deeply respect our users.”
Next year at this time, I’ll bet we see AI embedded in tons of ways we’ve never even thought of before now. That’s why I think Apple’s new developer AI tools could be the biggest thing for apps since apps.
Some Xcode improvements
Before we wrap this article, I want to mention that at its Platforms State of the Union, Apple announced some improvements to Xcode, the company’s development environment.
The company has integrated the now-typical AI coding tools into Xcode 26, allowing developers to ask an AI for help in coding, ask it to write code chunks, and more.
Also: How AI coding agents could destroy open source software
One feature I thought was interesting: Apple has made Xcode 26 AI-agnostic. You can use whatever LLM you want in the chat section of Xcode. If you’re using ChatGPT, you can use the free version or, if you subscribe, a paid-tier model. Apple said you can use other models as well, though Anthropic’s were the only other models it discussed in the Platforms State of the Union session.
In keeping with our previous AI discussion, Apple also said you can run models locally on your Mac, so your code doesn’t have to be sent to a cloud-based AI. That could be very important if you’re working under an NDA or other code-sharing restriction.
Apple Intelligence is still a letdown
Look, Apple Intelligence is still a disappointment. While Apple announced more Apple Intelligence features, there was a reason it focused on Liquid Glass (and mirrors) and shiny new user-interface elements: that’s something Apple does well.
Face it. Nobody was asking Apple when it’d make glowing, liquid-like UI puddles. Everyone was wondering when Apple would catch up with Google, Microsoft, and especially OpenAI.
It’s definitely not there this year. But I do think that a fairly competent AI model for apps — which is what the Foundation Models framework offers — will transform the types of features developers add to their code. And that is game-changing, even if it’s not as flashy as what Apple usually puts out.
What do you think about Apple’s move to offer on-device AI tools for free? Will it change how developers approach app design? Are you more likely to add AI features to your own projects now that the business and privacy barriers are lower? Do you see this as a meaningful shift in the mobile-app ecosystem, or is it just Apple playing catch-up? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.